Results 1 - 9 of 9
1.
Am J Bioeth ; 23(12): 57-59, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38010672
3.
J Eur CME ; 10(1): 1989243, 2021.
Article in English | MEDLINE | ID: mdl-34804636

ABSTRACT

Health data hold great promise for a healthier and happier life, but they also make us vulnerable. Drawing on millions or billions of data points, Machine Learning (ML) and Artificial Intelligence (AI) are now creating new benefits. Harvesting Big Data can certainly have great potential for the health system, too: it can support accurate diagnoses, better treatments and greater cost effectiveness. However, it can also have undesirable implications, often in the form of unintended side effects, which may in fact be terrible. Examples of this, as discussed in this article, are discrimination, the mechanisation of death, and genetic, social, behavioural or technological selection, which may imply eugenic effects or social Darwinism. As many unintended effects become visible only after years, we still lack sufficient criteria, long-term experience and advanced methods to reliably rule out that things may go terribly wrong. Handing over decision-making, responsibility or control to machines could be dangerous and irresponsible. It would also be in serious conflict with human rights and our constitution.

4.
Front Robot AI ; 5: 15, 2018.
Article in English | MEDLINE | ID: mdl-33500902

ABSTRACT

Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions that must be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a "tracing" condition, according to which the system should be designed in such a way that the outcome of its operations can always be traced back to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.
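The "tracing" condition lends itself to a concrete reading in software: every system outcome should carry an auditable link to at least one identifiable human in the chain of design and operation. The following is a minimal, hypothetical sketch of such an audit record; the class names, fields and the self-driving-car example are illustrative assumptions, not part of the paper.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class HumanLink:
        """One identifiable human along the chain of design and operation."""
        name: str
        role: str       # e.g. "designer", "deployer", "operator"
        decision: str   # what this person specified, approved or decided

    @dataclass
    class TracedOutcome:
        """Couples a system outcome to the humans it can be traced back to."""
        outcome: str
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        chain: List[HumanLink] = field(default_factory=list)

        def satisfies_tracing(self) -> bool:
            # The tracing condition: at least one human is identifiable
            # for this outcome along the chain of design and operation.
            return len(self.chain) > 0

    # Illustrative use for a non-military system (a self-driving car):
    record = TracedOutcome(
        outcome="emergency stop executed at pedestrian crossing",
        chain=[HumanLink("A. Designer", "designer", "specified pedestrian-priority braking rule")],
    )
    assert record.satisfies_tracing()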

6.
Front Genet ; 9: 31, 2018.
Article in English | MEDLINE | ID: mdl-29487613

ABSTRACT

Personalized medicine uses fine-grained information on individual persons to pinpoint deviations from the normal. 'Digital Twins' in engineering provide a conceptual framework to analyze these emerging data-driven health care practices, as well as their conceptual and ethical implications for therapy, preventative care and human enhancement. Digital Twins stand for a specific engineering paradigm, where individual physical artifacts are paired with digital models that dynamically reflect the status of those artifacts. When applied to persons, Digital Twins are an emerging technology that builds on in silico representations of an individual that dynamically reflect molecular status, physiological status and life style over time. We use Digital Twins as the working hypothesis that one would possess very detailed bio-physical and lifestyle information about a person over time. This perspective redefines the concept of 'normality' or 'health' as a set of patterns that are regular for a particular individual, against the backdrop of patterns observed in the population. This perspective will also affect what is considered therapy and what is enhancement, as can be illustrated with the cases of the 'asymptomatic ill' and life extension via anti-aging medicine. These changes are a consequence of how meaning is derived once measurement data are available: moral distinctions may be based on patterns found in these data and on the meanings grafted onto these patterns. Ethical and societal implications of Digital Twins are explored. Digital Twins imply a data-driven approach to health care. This approach has the potential to deliver significant societal benefits and can function as a social equalizer, by allowing for effective equalizing enhancement interventions. It can, however, also be a driver of inequality, given that a Digital Twin might not be an accessible technology for everyone, and that patterns identified across a population of Digital Twins can lead to segmentation and discrimination. This duality calls for governance as this emerging technology matures, including measures that ensure transparency of data usage and derived benefits, and data privacy.
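The redefinition of 'normality' as what is regular for a particular individual, set against population patterns, can be illustrated with a small numerical sketch. The code below is a hypothetical illustration, not from the article; the heart-rate values and the two-sigma threshold are assumptions chosen only to show the idea.

    import statistics

    def baseline(values):
        """Mean and spread of a series of readings (individual or population)."""
        return statistics.fmean(values), statistics.stdev(values)

    def flag_deviation(value, personal_history, population_values, k=2.0):
        """Is 'value' unusual for this person, and would it also be unusual
        against the population at large?"""
        p_mean, p_sd = baseline(personal_history)
        pop_mean, pop_sd = baseline(population_values)
        unusual_for_person = abs(value - p_mean) > k * p_sd
        unusual_for_population = abs(value - pop_mean) > k * pop_sd
        return unusual_for_person, unusual_for_population

    # A resting heart rate of 85 bpm sits comfortably within the population's
    # range, yet is a marked deviation from this individual's own pattern.
    print(flag_deviation(
        85,
        personal_history=[58, 60, 57, 61, 59],
        population_values=[70, 80, 75, 90, 65, 85],
    ))  # -> (True, False)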

7.
Sci Eng Ethics ; 25(6): 1789-1797, 2019 Dec.
Article in English | MEDLINE | ID: mdl-28028778

ABSTRACT

In the twenty-first century, the urgent problems the world is facing (the UN Sustainable Development Goals) are increasingly related to vast and intricate 'systems of systems', which comprise both socio-technical systems and eco-systems. In order to respond adequately and responsibly to these problems, engineers cannot focus on a single technical or other aspect in isolation, but must adopt a wider, multidisciplinary perspective on these systems, including an ethical and social perspective. Engineering curricula should therefore focus on what we call 'comprehensive engineering'. Comprehensive engineering implies ethical coherence, consilience of scientific disciplines, and cooperation between parties.


Subject(s)
Ethics, Professional; Sustainable Development; Curriculum; Engineering; Humans; Students; United Nations
8.
Nanoethics ; 5(3): 269-283, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22247745

ABSTRACT

Although applications are being developed and have reached the market, nanopharmacy to date is generally still conceived as an emerging technology, and its concept remains ill-defined. Nanopharmacy can also be construed as a converging technology, which combines features of multiple technologies, ranging from nanotechnology to medicine and ICT. It is still debated whether its features give rise to new ethical issues or whether the issues associated with nanopharmacy are merely an extension of existing issues in the underlying fields. We argue here that, regardless of the alleged newness of the ethical issues involved, developments occasioned by technological advances affect the roles played by stakeholders in the field of nanopharmacy to such an extent that this calls for a different approach to responsible innovation in this field. Specific features associated with nanopharmacy itself, together with features introduced by the associated converging technologies, bring about a shift in the roles of stakeholders that calls for a different approach to responsibility. We suggest that Value Sensitive Design is a suitable framework for involving stakeholders in addressing moral issues responsibly at an early stage of the development of new nanopharmaceuticals.

9.
Sci Eng Ethics ; 18(1): 143-155, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21533834

ABSTRACT

When thinking about ethics, technology is often mentioned only as the source of our problems, not as a potential solution to our moral dilemmas. When thinking about technology, ethics is often mentioned only as a constraint on developments, not as a source and spring of innovation. In this paper, we argue that ethics can be the source of technological development rather than just a constraint, and that technological progress can create moral progress rather than just moral problems. We show this through an analysis of how technology can contribute to the resolution of so-called moral overload or moral dilemmas. Such dilemmas typically create a moral residue that is the basis of a second-order principle telling us to reshape the world so that we can meet all our moral obligations. We can do so, among other things, through guided technological innovation.


Subject(s)
Engineering/ethics; Moral Obligations; Problem Solving; Technology/ethics; Humans